632 research outputs found

    Classifying motion states of AUV based on graph representation for multivariate time series

    Acknowledgement: This work is supported by the Natural Science Foundation of Shandong Province (ZR2020MF079) and the China Scholarship Council (CSC). Peer reviewed. Postprint.

    DyCL: Dynamic Neural Network Compilation Via Program Rewriting and Graph Optimization

    A DL compiler's primary function is to translate DNN programs written in high-level DL frameworks such as PyTorch and TensorFlow into portable executables. These executables can then be flexibly executed by the deployed host programs. However, existing DL compilers rely on a tracing mechanism, which involves feeding a runtime input to a neural network program and tracing the program execution paths to generate the computational graph necessary for compilation. Unfortunately, this mechanism falls short when dealing with modern dynamic neural networks (DyNNs) that possess varying computational graphs depending on the inputs. Consequently, conventional DL compilers struggle to accurately compile DyNNs into executable code. To address this limitation, we propose DyCL, a general approach that enables any existing DL compiler to successfully compile DyNNs. DyCL tackles the dynamic nature of DyNNs by introducing a compilation mechanism that redistributes the control and data flow of the original DNN programs during the compilation process. Specifically, DyCL develops program analysis and program transformation techniques to convert a dynamic neural network into multiple sub-neural networks. Each sub-neural network is devoid of conditional statements and is compiled independently. Furthermore, DyCL synthesizes a host module that models the control flow of the DyNNs and facilitates the invocation of the sub-neural networks. Our evaluation demonstrates the effectiveness of DyCL, achieving a 100% success rate in compiling all dynamic neural networks. Moreover, the compiled executables generated by DyCL exhibit significantly improved performance, running between 1.12× and 20.21× faster than the original DyNNs executed on general-purpose DL frameworks. This paper has been accepted to ISSTA 2023.
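The decomposition the abstract describes can be illustrated with a toy sketch in plain Python. The function names (`dynamic_forward`, `sub_net_a`, `sub_net_b`, `host`) are invented stand-ins, not DyCL's actual API; the point is only that each sub-network is branch-free (and thus traceable by a conventional compiler) while a synthesized host module reproduces the original control flow:

```python
# Original "dynamic" program: which computation runs depends on the input,
# so a tracing compiler would capture only one of the two paths.
def dynamic_forward(x):
    if sum(x) > 0:
        return [2 * v for v in x]   # branch A
    return [v + 1 for v in x]       # branch B

# DyCL-style decomposition (illustrative): each sub-network contains no
# conditional statements and could be traced and compiled independently.
def sub_net_a(x):
    return [2 * v for v in x]

def sub_net_b(x):
    return [v + 1 for v in x]

# Synthesized host module: models the control flow and dispatches to the
# (compiled) sub-networks, preserving the original program's semantics.
def host(x):
    return sub_net_a(x) if sum(x) > 0 else sub_net_b(x)
```

Because `host` is an ordinary host-language program rather than part of the traced graph, the data-dependent branch no longer has to survive compilation.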

    The changes in fractal dimension after a maximal exertion in swimming

    Quite often, linear variables are not sensitive enough to explain the changes in the motor behavior of elite athletes, so non-linear variables should be selected. The aim was to compare the fractal dimension before and after a maximal front-crawl bout. Twenty-four subjects performed an all-out 100-m front-crawl trial. Immediately before (pre-test) and after the trial (post-test), a speed-meter cable was attached to the swimmer's waist to measure the hip speed, from which the fractal dimension was derived. The fractal dimension showed a significant decrease, with a moderate effect size, between the pre- and post-tests. Twenty-one out of 24 swimmers decreased their fractal dimension. In conclusion, there is a decrease in the fractal dimension, and hence in the complexity of the swimming behavior, under fatigue after a maximal trial. This research was funded by the grant NIE AcRF 11/13 TB.

    Changes in classical kinematics and non-linear parameters after a maximal 100-m front-crawl bout

    In a linear system there is proportionality between input and output. Under this framework, it is expected that the amount of change in sports performance is proportional to variations in the inputs. However, as far as elite performance goes, this is not a straightforward assumption; sometimes the variables selected are not sensitive enough. Hence, there is a need for non-linear concepts to underpin such analysis. The aim was to compare classical kinematics and non-linear parameters after a maximal 100-m front-crawl bout. Twenty-four subjects (12 males and 12 females; 22.38±1.68 y) were invited to perform a 100-m freestyle race at maximal pace. Before (pre-test, i.e. rested) and immediately after (post-test, i.e. under fatigue) the maximal bout, they performed two maximal 25-m freestyle swims with a push-off start. A speedo-meter cord (Swim speedo-meter, Swimsportec, Hildesheim, Germany) was attached to the swimmer's hip (Barbosa et al., 2015) during the two 25-m trials to collect the instantaneous speed. The speed fluctuation (dv; Barbosa et al., 2015), approximate entropy (ApEn; Barbosa et al., 2015) and fractal dimension (FD; Higuchi, 1988) were computed. Repeated measures ANOVAs (pre-test vs. post-test; P≤0.05), effect sizes (eta squared) and 95% confidence intervals (95CI) were computed. The speed was 1.44±0.24 and 1.28±0.23 m/s in the pre- and post-test, respectively (F=55.136, P<0.001).

    NMTSloth: Understanding and Testing Efficiency Degradation of Neural Machine Translation Systems

    Neural Machine Translation (NMT) systems have received much recent attention due to their human-level accuracy. While existing works mostly focus on either improving accuracy or testing accuracy robustness, the computational efficiency of NMT systems, which is of paramount importance due to often vast translation demands and real-time requirements, has surprisingly received little attention. In this paper, we make the first attempt to understand and test potential computational efficiency robustness in state-of-the-art NMT systems. By analyzing the working mechanism and implementation of 1455 publicly accessible NMT systems, we observe a fundamental property in NMT systems that could be manipulated in an adversarial manner to reduce computational efficiency significantly. Our key motivation is to generate test inputs that sufficiently delay the generation of EOS, such that NMT systems have to go through enough iterations to satisfy the pre-configured threshold. We present NMTSloth, which develops a gradient-guided technique that searches for a minimal and unnoticeable perturbation at the character, token, and structure levels that sufficiently delays the appearance of EOS and forces these inputs to reach the naturally unreachable threshold. To demonstrate the effectiveness of NMTSloth, we conduct a systematic evaluation on three publicly available NMT systems: Google T5, AllenAI WMT14, and Helsinki-NLP translators. Experimental results show that NMTSloth can increase NMT systems' response latency and energy consumption by 85% to 3153% and 86% to 3052%, respectively, by perturbing just one character or token in the input sentence. Our case study shows that inputs generated by NMTSloth significantly affect battery power in real-world mobile devices (i.e., draining more than 30 times the battery power of normal inputs). This paper has been accepted to ESEC/FSE 2022.
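The property being exploited is the autoregressive decoding loop itself: each output token costs one model invocation, and decoding stops only at EOS or at a pre-configured length cap. A toy illustration of why postponing EOS inflates cost (this is not NMTSloth's gradient-guided search; `greedy_decode` and the step functions are invented stand-ins for a real model):

```python
EOS = 0
MAX_LEN = 50  # pre-configured decoding threshold

def greedy_decode(step_fn, max_len=MAX_LEN):
    """One model invocation per iteration; stops at EOS or the length cap."""
    out, steps = [], 0
    for _ in range(max_len):
        steps += 1
        tok = step_fn(out)       # stand-in for one decoder forward pass
        if tok == EOS:
            break
        out.append(tok)
    return out, steps

# Benign input: the (stand-in) model emits EOS after five tokens.
benign_step = lambda out: 7 if len(out) < 5 else EOS

# Adversarial input whose perturbation suppresses EOS: decoding runs
# all the way to the cap, multiplying latency and energy use.
adversarial_step = lambda out: 7
```

Since latency scales with the iteration count, pushing a five-iteration decode to the 50-iteration cap is roughly a 10× slowdown, mirroring (in miniature) the latency blow-ups reported above.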

    The changes in classical and nonlinear parameters after a maximal bout to elicit fatigue in competitive swimming

    The aim was to assess the effect of fatigue on linear and nonlinear parameters in swimming. Twenty-four fitness-oriented swimmers performed a maximal 100-m front-crawl bout to elicit fatigue. Before (pre-test) and immediately after (post-test) the bout, participants swam an all-out 25 m to derive the speed fluctuation (dv), approximate entropy (ApEn) and fractal dimension (FD) from the speed-time series collected by a speedo-meter. Swim speed was 10.85% slower in the post-test than in the pre-test (p < .001, η2=0.72). There was an effect of fatigue on the dv, with a moderate effect size: the dv increased, shifting the 95CI band from 0.116–0.134 to 0.140–0.161. The ApEn showed non-significant variation between the pre- and post-test, with the 95CIs of the pre- and post-test overlapping (pre: 0.659–0.700; post: 0.641–0.682). The FD also showed a significant variation (the 95CI moved from 1.954–1.965 to 1.933–1.951). It can be concluded that in swimming there are changes in classical and nonlinear parameters under fatigue. This research was funded by the NIE AcRF grant (RI 11/13 TB).
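The speed fluctuation dv used across these abstracts is conventionally the coefficient of variation of the intra-cycle speed, expressed as a percentage; a minimal sketch under that assumption (the exact formula in Barbosa et al., 2015 may differ in detail):

```python
import numpy as np

def speed_fluctuation(v):
    """dv as the coefficient of variation of the speed series, in percent."""
    v = np.asarray(v, dtype=float)
    return 100.0 * np.std(v) / np.mean(v)
```

A flatter intra-cycle speed profile yields a lower dv, so the reported rise in dv after the maximal bout reflects a less steady hip speed under fatigue.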